Analog (or analogue) television is analog transmission involving the broadcasting of encoded analog audio and analog video signals:[1] one in which the message conveyed by the broadcast signal is a function of deliberate variations in the amplitude and/or frequency of the signal. All broadcast television systems preceding digital television (DTV) used analog signals. Analog television may be broadcast wirelessly or distributed over the copper wire used by cable converters.
The earliest mechanical television systems used spinning discs with patterns of holes punched into the disc to "scan" an image. A similar disc reconstructed the image at the receiver, with the rotation of the receiver disc synchronized via sync pulses broadcast with the image information. However, these mechanical systems were slow, the images were dim and flickered severely, and the image resolution was very low. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work.
Analog television did not really begin as an industry until the development of the cathode-ray tube (CRT), which uses a steered electron beam to "write" lines of electrons across a phosphor coated surface. The electron beam could be swept across the screen much faster than any mechanical disc system, allowing for more closely spaced scan lines and much higher image resolution, while slow-fade phosphors removed image flicker effects. Also far less maintenance was required of an all-electronic system compared to a spinning disc system.
Broadcasters using analog television systems encode their signal using NTSC, PAL or SECAM analog encoding[2] and then use RF modulation to modulate this signal onto a Very high frequency (VHF) or Ultra high frequency (UHF) carrier. Each frame of a television image is composed of lines drawn on the screen. The lines are of varying brightness; the whole set of lines is drawn quickly enough that the human eye perceives it as one image. The next sequential frame is displayed, allowing the depiction of motion. The analog television signal contains timing and synchronization information so that the receiver can reconstruct a two-dimensional moving image from a one-dimensional time-varying signal.
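The relationship between line count, frame rate, and scan rate can be illustrated with a short calculation. This is a sketch using the nominal 525/30 and 625/25 figures; color NTSC actually runs at a slightly lower line rate of about 15,734 Hz.

```python
# Line (horizontal) scan frequency follows directly from the standard's
# line count and frame rate: every line of every frame is drawn in turn.

def line_frequency(lines_per_frame, frames_per_second):
    """Horizontal scan frequency in Hz."""
    return lines_per_frame * frames_per_second

print(line_frequency(525, 30))  # 15750 (monochrome NTSC nominal)
print(line_frequency(625, 25))  # 15625 (PAL/SECAM 625-line systems)
```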
In many countries, over-the-air broadcast television of analog audio and analog video signals is being discontinued, to allow the re-use of the television broadcast radio spectrum for other services such as datacasting and subchannels.
The first commercial television systems were black-and-white; color television began in the 1950s.[3]
A practical television system needs to take luminance, chrominance (in a color system), synchronization (horizontal and vertical), and audio signals, and broadcast them over a radio transmission. The transmission system must include a means of television channel selection.
Analog broadcast television systems come in a variety of frame rates and resolutions. Further differences exist in the frequency and modulation of the audio carrier. The monochrome combinations still existing in the 1950s were standardized by the International Telecommunication Union (ITU) and designated by the capital letters A through N. When color television was introduced, the hue and saturation information was added to the monochrome signal in a way that black-and-white televisions ignore, achieving backward compatibility. This concept holds for all analog television standards.
However, there are three standards for the way the additional color information can be encoded and transmitted. The first was the American NTSC (National Television System Committee) color television system. The European/Australian PAL (Phase Alternating Line) and the French/Soviet SECAM (Séquentiel Couleur Avec Mémoire) standards were developed later and attempt to cure certain defects of the NTSC system. PAL's color encoding is similar to that of NTSC; SECAM, though, uses a different modulation approach than PAL or NTSC.
In principle, all three color encoding systems can be combined with any scan line/frame rate combination. Therefore, to describe a given signal completely it is necessary to quote the color system together with the broadcast standard's letter. For example, the United States uses NTSC-M, the UK uses PAL-I, France uses SECAM-L, much of Western Europe and Australia use PAL-B/G, and most of Eastern Europe uses PAL-D/K or SECAM-D/K.
However, not all of these possible combinations actually exist. NTSC is currently only used with system M, even though there were experiments with NTSC-A (405-line) and NTSC-I (625-line) in the UK. PAL is used with a variety of 625-line standards (B, G, D, K, I, N) but also with the North American 525-line standard, accordingly named PAL-M. Likewise, SECAM is used with a variety of 625-line standards.
For this reason, many people refer to any 625/25 signal as "PAL" and to any 525/30 signal as "NTSC", even when referring to digital signals (for example, on DVD-Video) that contain no analog color encoding, and thus no PAL or NTSC signal at all. Although this usage is common, it is misleading, as that is not the original meaning of the terms PAL/SECAM/NTSC.
Although a number of different broadcast television systems were in use worldwide, the same principles of operation apply.[4]
A cathode-ray tube (CRT) television displays an image by scanning a beam of electrons across the screen in a pattern of horizontal lines known as a raster. At the end of each line the beam returns to the start of the next line; at the end of the last line it returns to the top of the screen. As it passes each point the intensity of the beam is varied, varying the luminance of that point. A color television system is identical except that an additional signal known as chrominance controls the color of the spot.
Raster scanning is shown in a slightly simplified form below.
When analog television was developed, no affordable technology for storing video signals existed; the luminance signal had to be generated and transmitted at the same time as it was displayed on the CRT. It was therefore essential to keep the raster scanning in the camera (or other device producing the signal) in exact synchronization with the scanning in the television.
The physics of the CRT require that a finite time interval is allowed for the spot to move back to the start of the next line (horizontal retrace) or the start of the screen (vertical retrace). The timing of the luminance signal must allow for this.
The human eye has a characteristic called persistence of vision: quickly displaying successive scan images creates the illusion of smooth motion. Flickering of the image can be partially solved by using a long-persistence phosphor coating on the CRT, so that successive images fade slowly. However, slow phosphor has the negative side-effect of causing image smearing and blurring when there is a large amount of rapid on-screen motion.
The maximum frame rate depends on the bandwidth of the electronics and the transmission system, and on the number of horizontal scan lines in the image. A frame rate of 25 or 30 frames per second is a satisfactory compromise, and interlacing, which transmits two video fields per frame, is used to build the image. Interlacing doubles the apparent number of images per second and further reduces flicker and other defects in transmission.
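As a rough sketch of the arithmetic, interlacing raises the field rate without raising the line count or the bandwidth (function names are illustrative):

```python
def field_rate(frame_rate, interlaced=True):
    """Fields displayed per second; interlaced systems send two fields per frame."""
    return frame_rate * (2 if interlaced else 1)

def split_fields(total_lines):
    """Line numbers belonging to the two fields of one interlaced frame."""
    odd = list(range(1, total_lines + 1, 2))   # first (odd) field
    even = list(range(2, total_lines + 1, 2))  # second (even) field
    return odd, even

print(field_rate(25))  # 50 fields per second for 625/25 systems
print(field_rate(30))  # 60 fields per second (nominal) for 525/30 systems
```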
Plasma and LCD screens have also been used in analog television sets; these displays use lower voltages than older CRT displays. Many dual-system television receivers, equipped to receive both analog and digital transmissions, have an analog tuner and must use a television antenna.
The television system for each country specifies a number of television channels within the UHF or VHF frequency ranges. A channel actually consists of two signals: the picture information is transmitted using amplitude modulation on one carrier frequency, and the sound is transmitted with frequency modulation at a fixed offset (typically 4.5 to 6 MHz) from the picture carrier.
The channel frequencies chosen represent a compromise between allowing enough bandwidth for video (and hence satisfactory picture resolution), and allowing enough channels to be packed into the available frequency band. In practice a technique called vestigial sideband is used to reduce the channel spacing, which would be at least twice the video bandwidth if pure AM was used.
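The saving can be sketched numerically. The figures below are nominal System M values (4.2 MHz video bandwidth, 0.75 MHz vestige) and are illustrative only:

```python
def dsb_bandwidth(video_bw_mhz):
    """Pure double-sideband AM needs twice the video bandwidth."""
    return 2 * video_bw_mhz

def vsb_bandwidth(video_bw_mhz, vestige_mhz):
    """Vestigial sideband: one full sideband plus a small vestige of the other."""
    return video_bw_mhz + vestige_mhz

print(dsb_bandwidth(4.2))        # 8.4 MHz if both sidebands were kept
print(vsb_bandwidth(4.2, 0.75))  # 4.95 MHz with a vestigial lower sideband
```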
Signal reception is invariably done via a superheterodyne receiver: the first stage is a tuner which selects a television channel and frequency-shifts it to a fixed intermediate frequency (IF). The IF stages then amplify the signal from the microvolt range to fractions of a volt.
At this point the IF signal consists of a video carrier wave at one frequency and the sound carrier at a fixed offset. A demodulator recovers the video signal and sound as an FM signal at the offset frequency (this is known as intercarrier sound).
The FM sound carrier is then demodulated, amplified, and used to drive a loudspeaker. Until the advent of the NICAM and MTS systems, TV sound transmissions were invariably monophonic.
The video carrier is demodulated to give a composite video signal; this contains luminance, chrominance and synchronization signals;[5] this is identical to the video signal format used by analog video devices such as VCRs or CCTV cameras. Note that the RF signal modulation is inverted compared to the conventional AM: the minimum video signal level corresponds to maximum carrier amplitude, and vice versa. The carrier is never shut off altogether; this is to ensure that intercarrier sound demodulation can still occur.
Each line of the displayed image is transmitted using a signal as shown above. The same basic format (with minor differences mainly related to timing and the encoding of color) is used for PAL, NTSC and SECAM television systems. A monochrome signal is identical to a color one, with the exception that the elements shown in color in the diagram (the color burst, and the chrominance signal) are not present.
The front porch is a brief (about 1.5 microsecond) period inserted between the end of each transmitted line of picture and the leading edge of the next line sync pulse. Its purpose was to allow voltage levels to stabilise in older televisions, preventing interference between picture lines. The front porch is the first component of the horizontal blanking interval which also contains the horizontal sync pulse and the back porch.[6][7]
The back porch is the portion of each scan line between the end (rising edge) of the horizontal sync pulse and the start of active video. It is used to restore the black level (300 mV) reference in analog video. In signal processing terms, it compensates for the fall time and settling time following the sync pulse.[6][7]
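A minimal sketch of back-porch clamping (DC restoration) on a sampled line; the function name and the 0.3 V blanking target are illustrative figures for a 1 V composite signal:

```python
def clamp_black_level(line_samples, porch_slice, target=0.3):
    """Shift a whole scan line so the back-porch average sits at the
    blanking/black reference level (about 300 mV in a 1 V signal)."""
    porch = line_samples[porch_slice]
    offset = target - sum(porch) / len(porch)
    return [s + offset for s in line_samples]

# A line received with a 50 mV DC error on its back porch:
line = [0.25] * 10 + [0.5] * 90
restored = clamp_black_level(line, slice(0, 10))
print(round(restored[0], 3), round(restored[50], 3))  # 0.3 0.55
```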
In color TV systems such as PAL and NTSC, this period also includes the colorburst signal. In the SECAM system it contains the reference subcarrier for each consecutive color difference signal in order to set the zero-color reference.
In some professional systems, particularly satellite links between locations, the audio is embedded within the back porch of the video signal, to save the cost of renting a second channel.
The luminance component of a composite video signal varies between 0 V and approximately 0.7 V above the 'black' level. In the NTSC system, there is a blanking signal level used during the front porch and back porch, and a black signal level 75 mV above it; in PAL and SECAM these are identical.
In a monochrome receiver the luminance signal is amplified to drive the control grid in the electron gun of the CRT. This changes the intensity of the electron beam and therefore the brightness of the spot being scanned. Brightness and contrast controls determine the DC shift and amplification, respectively.
A color signal conveys picture information for each of the red, green, and blue components of an image (see the article on color space for more information). However, these are not simply transmitted as three separate signals, because doing so would require roughly three times the bandwidth and would not be compatible with existing monochrome receivers.
Instead, the RGB signals are converted into YUV form, where the Y signal represents the overall brightness, and can be transmitted as the luminance signal. This ensures a monochrome receiver will display a correct picture. The U and V signals are the difference between the Y signal and the B and R signals respectively. The U signal then represents how "blue" the color is, and the V signal how "red" it is. The advantage of this scheme is that the U and V signals are zero when the picture has no color content. Since the human eye is more sensitive to errors in luminance than in color, the U and V signals can be transmitted in a relatively lossy (specifically: bandwidth-limited) way with acceptable results. The G signal is not transmitted in the YUV system, but rather it is recovered electronically at the receiving end.
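The conversion can be sketched as follows, using the classic luma weights. The exact weights and the scaling of U and V differ between standards; this is an illustrative, unscaled version:

```python
# Classic Rec.601-style luma weights; broadcast standards scale U and V
# further, which is omitted here for clarity.
WR, WG, WB = 0.299, 0.587, 0.114

def rgb_to_yuv(r, g, b):
    y = WR * r + WG * g + WB * b   # overall brightness (luminance)
    u = b - y                      # "blueness" (unscaled B - Y)
    v = r - y                      # "redness"  (unscaled R - Y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    b = u + y
    r = v + y
    g = (y - WR * r - WB * b) / WG  # G is recovered from Y, R and B
    return r, g, b

y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
print(round(y, 6), round(u, 6), round(v, 6))  # 0.5 0.0 0.0 (gray: no color content)
```

Note how a gray input yields zero U and V, which is exactly the property that makes the scheme degrade gracefully on monochrome receivers.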
In the NTSC and PAL color systems, U and V are transmitted by adding a color subcarrier to the composite video signal, and using quadrature amplitude modulation on it. For NTSC, the subcarrier is usually at about 3.58 MHz, but for the PAL system it is at about 4.43 MHz. These frequencies are within the luminance signal band, but their exact frequencies were chosen such that they are midway between two harmonics of the horizontal line repetition rate, thus ensuring that the majority of the power of the luminance signal does not overlap with the power of the chrominance signal.
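The "midway between harmonics" choice can be checked numerically: NTSC's subcarrier is an odd multiple (455) of half the line rate. The figures below are the nominal color-NTSC values:

```python
F_H = 15734.2657            # color NTSC horizontal line rate, Hz (approximate)
HARMONIC = 455              # odd multiple of half the line rate
F_SC = HARMONIC * F_H / 2   # chrominance subcarrier frequency

print(round(F_SC))          # about 3579545 Hz, i.e. the familiar ~3.58 MHz
```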
In the British PAL (D) system, the actual chrominance center frequency is 4.43361875 MHz. This is not an exact multiple of the line scan frequency; it is deliberately offset (by a quarter of the line rate, plus a small 25 Hz shift) to minimize the chrominance beat interference pattern that would otherwise be visible in areas of high color saturation in the transmitted picture.
The two signals (U and V) modulate both the amplitude and phase of the color carrier, so to demodulate them the receiver needs a reference signal to compare against. For this reason, a short burst of reference signal known as the color burst is transmitted during the back porch (retrace period) of each scan line. A reference oscillator in the receiver locks onto this signal (see phase-locked loop) to achieve a phase reference, and uses its amplitude to set an AGC system to achieve an amplitude reference.
The U and V signals are then demodulated by band-pass filtering to retrieve the color subcarrier, mixing it with the in-phase and quadrature signals from the reference oscillator, and low-pass filtering the results.
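A numerical sketch of this quadrature demodulation, idealized: the reference phase is perfect, and averaging over whole subcarrier cycles stands in for the low-pass filter. The constants are illustrative:

```python
import math

F_SC = 4.43e6        # illustrative PAL-like subcarrier frequency, Hz
F_S = 8 * F_SC       # sample rate: 8 samples per subcarrier cycle

def qam_modulate(u, v, n):
    """n samples of U and V in quadrature on the (suppressed) subcarrier."""
    w = 2 * math.pi * F_SC / F_S
    return [u * math.sin(w * t) + v * math.cos(w * t) for t in range(n)]

def qam_demodulate(chroma):
    """Mix with in-phase/quadrature reference signals and average
    (a crude low-pass filter over complete cycles)."""
    w = 2 * math.pi * F_SC / F_S
    n = len(chroma)
    u = 2 * sum(c * math.sin(w * t) for t, c in enumerate(chroma)) / n
    v = 2 * sum(c * math.cos(w * t) for t, c in enumerate(chroma)) / n
    return u, v

u, v = qam_demodulate(qam_modulate(0.3, -0.2, 800))
print(round(u, 6), round(v, 6))  # 0.3 -0.2
```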
NTSC uses this process unmodified. Unfortunately, this often results in poor color reproduction due to phase errors in the received signal. The PAL D (delay) system corrects this by reversing the phase of the color signal on each successive line and averaging the results over pairs of lines. This process is achieved by the use of a delay line of 1H duration (where H = the horizontal scan period); a typical circuit converts the low-frequency color signal to ultrasound and back again. Phase-shift errors between successive lines are therefore cancelled out, and the wanted signal amplitude is increased when the two in-phase (coincident) signals are recombined.
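The cancellation can be sketched with complex arithmetic, treating each line's chrominance as a phasor. This is an idealized model (constant phase error, unit saturation; names are illustrative):

```python
import cmath, math

def pal_average(phase_error_deg, u=0.0, v=1.0):
    """Average a normal line with the following V-switched line (the sign of V
    is undone before averaging), as the 1H delay-line arrangement does."""
    err = cmath.exp(1j * math.radians(phase_error_deg))
    line_a = complex(u, v) * err            # normal line, rotated by the error
    line_b = complex(u, -v) * err           # V-switched line, same error
    corrected_b = complex(line_b.real, -line_b.imag)  # undo the V switch
    return (line_a + corrected_b) / 2

out = pal_average(10.0)
print(round(out.real, 6), round(out.imag, 6))  # hue restored; amplitude ~cos(10 deg)
```

A constant 10 degree phase error leaves the averaged hue exactly correct, at the cost of a slight loss of saturation, which is far less visible than a hue shift.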
In the SECAM television system, U and V are transmitted on alternate lines, using simple frequency modulation of two different color subcarriers.
In analog color CRT displays, the brightness signal (luminance) is fed to the cathode connections of the electron guns, and the color difference signals (chrominance signals) are fed to the control grid connections. This simple matrix mixing technique was superseded in later solid-state signal-processing designs.
Synchronizing pulses added to the video signal at the end of every scan line and video frame ensure that the sweep oscillators in the receiver remain locked in step with the transmitted signal, so that the image can be reconstructed on the receiver screen.[6][7][8]
A sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync (see "Other technical information" below for extra detail).
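In outline, the separation is a level comparison followed by pulse-length discrimination. A toy sketch over an arbitrary sample grid (thresholds and timings illustrative):

```python
def separate_sync(samples, threshold=0.15):
    """True where the composite signal is in the sync region: sync tips sit
    at 0 V, below the 0.3 V blanking level, so a level comparison finds them."""
    return [s < threshold for s in samples]

def pulse_lengths(sync):
    """Run lengths of consecutive sync samples; short runs are horizontal
    sync pulses, much longer runs belong to vertical sync."""
    runs, count = [], 0
    for active in sync:
        if active:
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return runs

# A short pulse (horizontal) followed by a long one (vertical):
signal = [0.0] * 5 + [0.7] * 20 + [0.0] * 30 + [0.7] * 20
print(pulse_lengths(separate_sync(signal)))  # [5, 30]
```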
The horizontal synchronization pulse (horizontal sync, or HSYNC) separates the scan lines. The horizontal sync signal is a single short pulse which indicates the start of every line. The rest of the scan line follows, with the signal ranging from 0.3 V (black) to 1 V (white), until the next horizontal or vertical synchronization pulse.
The format of the horizontal sync pulse varies. In the 525-line NTSC system it is a 4.85 µs pulse at 0 V; in the 625-line PAL system it is a 4.7 µs pulse at 0 V. This is lower than the amplitude of any video signal ("blacker than black"), so it can be detected by the level-sensitive "sync stripper" circuit of the receiver.
Vertical synchronization (also called vertical sync or VSYNC) separates the video fields. In PAL and NTSC, the vertical sync pulse occurs within the vertical blanking interval. The vertical sync pulses are made by prolonging the length of HSYNC pulses through almost the entire length of the scan line.
The vertical sync signal is a series of much longer pulses, indicating the start of a new field. The sync pulses occupy the whole of line interval of a number of lines at the beginning and end of a scan; no picture information is transmitted during vertical retrace. The pulse sequence is designed to allow horizontal sync to continue during vertical retrace; it also indicates whether each field represents even or odd lines in interlaced systems (depending on whether it begins at the start of a horizontal line, or mid-way through).
The format of such a signal in 525-line NTSC is:
Each pre- or post-equalizing pulse consists of half a scan line of black signal: 2 µs at 0 V, followed by 30 µs at 0.3 V.
Each long sync pulse consists of an equalizing pulse with timings inverted: 30 µs at 0 V, followed by 2 µs at 0.3 V.
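Using the half-line timings quoted above (at one sample per microsecond), the vertical interval can be sketched as follows; this ignores the sub-microsecond detail of the real standard:

```python
def equalizing_pulse():
    """Half a scan line: 2 us of sync level (0 V), then 30 us of blanking (0.3 V)."""
    return [0.0] * 2 + [0.3] * 30

def long_sync_pulse():
    """Timings inverted: 30 us at 0 V, then 2 us at 0.3 V (a 'serrated' pulse)."""
    return [0.0] * 30 + [0.3] * 2

def vertical_interval():
    """NTSC-style sequence: six pre-equalizing, six long, and six
    post-equalizing pulses, i.e. nine lines' worth of half-line pulses."""
    return equalizing_pulse() * 6 + long_sync_pulse() * 6 + equalizing_pulse() * 6

print(len(vertical_interval()))  # 576 samples = 18 half-lines of 32 us each
```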
In video production and computer graphics, changes to the image are often kept in step with the vertical synchronization pulse to avoid visible discontinuity of the image. Since the frame buffer of a computer graphics display imitates the dynamics of a cathode-ray display, if it is updated with a new image while the image is being transmitted to the display, the display shows a mishmash of both frames, producing a page tearing artifact partway down the image.
Vertical synchronization eliminates this by timing frame buffer fills to coincide with the vertical blanking interval, thus ensuring that only whole frames are seen on-screen. Software such as video games and computer aided design (CAD) packages often allow vertical synchronization as an option, because it delays the image update until the vertical blanking interval. This produces a small penalty in latency, because the program has to wait until the video controller has finished transmitting the image to the display before continuing. Triple buffering reduces this latency significantly.
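The timing logic amounts to waiting for the next blanking instant before swapping buffers. A minimal model of that wait (a sketch; names and the 60 Hz grid are illustrative):

```python
def next_vblank(now, refresh_hz=60):
    """Time (in seconds) of the next vertical blanking instant after `now`,
    assuming vblanks occur on a regular grid of 1/refresh_hz seconds."""
    period = 1.0 / refresh_hz
    return (int(now / period) + 1) * period

# A frame finished rendering at t = 21 ms; with vsync on, the buffer swap is
# deferred to the next vblank instead of tearing mid-frame.
print(round(next_vblank(0.021) * 1000, 1))  # 33.3 (milliseconds)
```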
Two timing intervals are defined - the front porch between the end of displayed video and the start of the sync pulse, and the back porch after the sync pulse and before displayed video. These and the sync pulse itself are called the horizontal blanking (or retrace) interval and represent the time that the electron beam in the CRT is returning to the start of the next display line.
The lack of precision timing components available in early television receivers meant that the timebase circuits occasionally needed manual adjustment. The adjustment took the form of horizontal hold and vertical hold controls, usually on the rear of the television set. Loss of horizontal synchronization usually resulted in an unwatchable picture; loss of vertical synchronization would produce an image rolling up or down the screen.
As of late 2009, ten countries had completed the process of turning off analog terrestrial broadcasting, and many others had plans to do so or were in the process of a staged conversion. The first country to make a wholesale switch to digital over-the-air (terrestrial television) broadcasting was Luxembourg in 2006, followed later that year by the Netherlands; in 2007 by Finland, Andorra, Sweden, and Switzerland; in 2008 by Belgium (Flanders) and Germany; and in 2009 by the United States (high-power stations), the Isle of Man, Norway, and Denmark. In 2010, Belgium (Wallonia), Spain, Wales, Latvia, Estonia, the Channel Islands, and Slovenia completed the transition; in 2011, Israel, Austria, Monaco, Scotland, Cyprus, Japan (excluding Miyagi, Iwate, and Fukushima Prefectures), Malta, and France followed.
In the United States, high-power over-the-air broadcasts have been solely in the ATSC digital format since June 12, 2009, the date the Federal Communications Commission (FCC) set for the end of all high-power analog TV transmissions. As a result, almost two million households could no longer watch TV because they were not prepared for the transition. The switchover had originally been scheduled for February 17, 2009, until the U.S. Congress passed the DTV Delay Act.[9] By special dispensation, some analog TV signals ceased on the original date.[10] While the majority of over-the-air viewers in the U.S. watch full-power stations (which number about 1,800), there are three other categories of TV stations in the U.S.: low-power broadcasting stations, Class A stations, and TV translator stations. There is presently no deadline for these stations, about 7,100 in number, to convert to digital broadcasting.
In broadcasting, whatever happens in the United States also happens simultaneously in southern Canada and northern Mexico, because those areas are covered by U.S. TV stations. Furthermore, the major cities of southern Canada (Toronto, Montreal, Vancouver, Ottawa, Winnipeg, Sault Ste. Marie, Quebec City, Charlottetown, Halifax, and so forth) made their transitions to digital TV broadcasts simultaneously with the U.S.
In Japan, the switch to digital occurred on July 24, 2011, except in Fukushima, Iwate, and Miyagi prefectures, where conversion was delayed one year due to complications from the 2011 Tōhoku earthquake and tsunami. In Canada, the switchover is scheduled for August 31, 2011. China is scheduled to switch in 2015. In the United Kingdom, the digital switchover has different dates for each part of the country, but the entire U.K. should be on digital TV by 2012.
Brazil switched to digital TV on December 2, 2007, in its major cities, and it is estimated that complete conversion across all of Brazil will take about seven years; large, sparsely populated parts of the country lack electricity and television service. Australia will turn off analog TV in stages, region by region, between 2010 and 2013.[11]
In Malaysia, the Malaysian Communications & Multimedia Commission (MCMC) advertised for tender bids to be submitted in the third quarter of 2009 for the 470 through 742 MHz UHF allocation, to enable Malaysia's broadcast system to move into DTV. The new broadcast band allocation would result in Malaysia's having to build an infrastructure for all broadcasters, using a single digital terrestrial transmission/TV broadcast (DTTB) channel.
Large portions of Malaysia are also covered by TV broadcasts from Singapore, Thailand, Brunei, and Indonesia (from Borneo).
Users may then encode and transmit their television programs on this channel's digital data stream. The winner was to be announced at the end of 2009 or early 2010. A condition of the award is that digital transmission must start as soon as possible, with analog switch-off proposed for 2015. The scheme may not go ahead, as the succeeding government under Najib Tun Razak deferred the transition indefinitely in favor of his 1Malaysia concept, meaning that analog television will continue for longer than originally planned.
A typical analog television receiver is based around the block diagram shown below:
Image synchronization is achieved by transmitting negative-going pulses; in a composite video signal of 1 volt amplitude, these are approximately 0.3 V below the "black level". The horizontal sync signal is a single short pulse marking the start of each line, while the vertical sync signal is a series of much longer pulses marking the start of each field; in the TV receiver, a sync separator circuit detects the sync voltage levels and sorts the pulses into horizontal and vertical sync, as described above.
In an analog receiver with a CRT display, sync pulses are fed to horizontal and vertical timebase amplifier circuits. These generate modified sawtooth and parabola current waveforms to scan the electron beam in a linear way; the waveform shapes are necessary to compensate for the varying distance between the electron beam source and points on the screen surface. Each beam-deflection circuit is reset by the appropriate sync timing pulse. The waveforms are fed to the horizontal and vertical scan coils wrapped around the CRT; these coils produce a magnetic field proportional to the changing current, which deflects the electron beam across the screen.

In the 1950s, the timebase supply of television receivers was derived directly from the mains supply. A simple circuit consisted of a series voltage-dropper resistance and a rectifier valve (tube) or semiconductor diode, avoiding the cost of a large high-voltage mains (50 or 60 Hz) transformer. This type of circuit was used with thermionic valve (tube) technology; it was inefficient and produced a lot of heat, which led to premature failures in the circuitry. In the 1960s, semiconductor technology was introduced into timebase circuits. During the late 1960s in the U.K., power generation synchronous with the scan line rate was introduced into solid-state receiver designs.[12] These had very complex circuits in which faults were difficult to trace, but used power very efficiently. In the early 1970s, thyristor-based switching circuits operating at AC mains (50 Hz) and line timebase (15,625 Hz) frequencies were introduced, and in the U.K. the simple 50 Hz power circuits were discontinued. The design changes were prompted by electricity-supply contamination problems arising from EMI[13] and by supply-loading issues due to energy being taken from only the positive half cycle of the mains waveform.[14]
Most of the receiver's circuitry (at least in transistor- or IC-based designs) operates from a comparatively low-voltage DC power supply. However, the anode connection for a cathode-ray tube requires a very high voltage (typically 10-30 kV) for correct operation.
This voltage is not directly produced by the main power supply circuitry; instead the receiver makes use of the horizontal scanning circuitry. Direct current (DC) is switched through the line output transformer, and alternating current (AC) is induced into the scan coils. At the end of each horizontal scan line, the magnetic field that has built up in both the transformer and the scan coils is a source of latent electromagnetic energy, and this stored collapsing-field energy can be captured. The brief reverse-flow current (lasting about 10% of the line scan time) from both the line output transformer and the horizontal scan coil is discharged into the primary winding of the flyback transformer through a rectifier, which blocks this negative reverse EMF. A small-value capacitor is connected across the scan switching device; it tunes the circuit inductances to resonate at a much higher frequency, which slows down (lengthens) the flyback time from the extremely rapid decay rate that would otherwise result. One of the secondary windings on the flyback transformer then feeds this brief high-voltage pulse to a Cockcroft-Walton voltage multiplier, producing the required EHT supply. A flyback converter is a power supply circuit operating on similar principles.
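The role of the tuning capacitor can be sketched numerically: the retrace interval is roughly half a period of the resonance it forms with the scan inductances. The component values below are purely illustrative:

```python
import math

def flyback_time(inductance_h, capacitance_f):
    """Approximate retrace time: half a resonant period of the tuned
    circuit, t = pi * sqrt(L * C)."""
    return math.pi * math.sqrt(inductance_h * capacitance_f)

line_period = 1 / 15625                # 64 us line period (625-line systems)
retrace = flyback_time(1e-3, 10e-9)    # e.g. 1 mH scan inductance with 10 nF
print(round(retrace * 1e6, 1))         # ~9.9 us, a small fraction of the line
```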
Typical modern design incorporates the flyback transformer and rectifier circuitry into a single unit with a captive output lead, (known as a diode split line output transformer),[15] so that all high-voltage parts are enclosed. Earlier designs used a separate line output transformer and a well insulated high voltage multiplier unit. The high frequency (15 kHz or so) of the horizontal scanning allows reasonably small components to be used.